cybersecurity tool
ChatGPT maker quietly changes rules to allow the US military to incorporate its technology
OpenAI, the maker of ChatGPT, has quietly changed its rules and removed a ban on using the chatbot and its other AI tools for military purposes - and revealed that it is already working with the Department of Defense. Experts have previously voiced fears that AI could escalate conflicts around the world thanks to 'slaughterbots' which can kill without any human intervention. The rule change, which occurred last week, removed a sentence which said that the company would not permit usage of its models for 'activity that has high risk of physical harm, including: weapons development, military and warfare.' An OpenAI spokesman said: 'Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission.'
- North America > United States (1.00)
- Europe > United Kingdom (0.05)
- Europe > Ukraine (0.05)
- (4 more...)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.98)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.30)
OpenAI Working With U.S. Military on Cybersecurity Tools
OpenAI is working with the Pentagon on a number of projects including cybersecurity capabilities, a departure from the startup's earlier ban on providing its artificial intelligence to militaries. The ChatGPT maker is developing tools with the U.S. Defense Department on open-source cybersecurity software -- collaborating with DARPA for its AI Cyber Challenge announced last year -- and has had initial talks with the U.S. government about methods to assist with preventing veteran suicide, Anna Makanju, the company's vice president of global affairs, said in an interview at Bloomberg House at the World Economic Forum in Davos on Tuesday. The company had recently removed language in its terms of service banning its AI from "military and warfare" applications. Makanju described the decision as part of a broader update of its policies to adjust to new uses of ChatGPT and its other tools. "Because we previously had what was essentially a blanket prohibition on military, many people thought that would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world," she said.
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.94)
Integration of ChatGPT with Cybersecurity Tools for Enhanced Security Management
Cybersecurity has become one of the major concerns in the digital world. The constant evolution of technology and the increase in the number of cyberattacks have made it necessary for organizations to take proper measures to protect their assets and data. Integrating advanced AI tools like ChatGPT with cybersecurity tools can help enhance security management and provide better protection against cyber threats. ChatGPT is a language model developed by OpenAI that can generate human-like text based on the input given to it. It is trained on a vast amount of data from the internet and can perform tasks such as answering questions, generating text, and translating languages. ChatGPT can be integrated with cybersecurity tools to enhance the security management process.
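As a rough illustration of the kind of integration this excerpt describes, the sketch below builds a request payload for OpenAI's chat completions HTTP endpoint that asks the model to summarize and triage a security alert. The alert fields, the prompt wording, and the model name are illustrative assumptions, not details from the article, and the request is constructed but not sent.

```python
import json

# Endpoint documented in OpenAI's API reference; shown here, not called.
OPENAI_CHAT_URL = "https://api.openai.com/v1/chat/completions"

def build_triage_request(alert: dict, model: str = "gpt-4") -> dict:
    """Build a chat-completions payload asking the model to triage an alert.

    `alert` is a hypothetical SIEM record; its field names are assumptions.
    """
    prompt = (
        "Summarize this security alert in two sentences and rate its "
        "severity as low, medium, or high:\n" + json.dumps(alert, indent=2)
    )
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a SOC triage assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0,  # deterministic output is preferable for triage
    }

alert = {"source_ip": "10.0.0.7", "rule": "multiple failed logins", "count": 42}
payload = build_triage_request(alert)
```

In practice the payload would be POSTed with an API key, and the model's summary fed back into the ticketing or SIEM workflow; keeping the prompt construction in a separate function makes it easy to test without network access.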
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (1.00)
How Deep Learning Improves Cyber Security
The threat of cyber attacks has increased dramatically in recent years, and traditional measures now appear insufficient. Because of this, deep learning in cyber security is rapidly gaining ground and may hold the key to solving many cybersecurity issues. With the advance of technology there has also been an increase in threats to data security, and a growing need to protect an organization's operations using cybersecurity tools. However, companies are struggling because most cybersecurity tools are signature-dependent: they rely on signatures or indicators of compromise for the threat-detection capabilities of the technologies they use to safeguard their business.
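The contrast the passage draws can be shown with a toy example: a signature-based check only flags patterns it has seen before, while an anomaly-based check scores how far an event deviates from a learned baseline, so it can react to attacks with no known signature. Both the "signature" strings and the baseline numbers below are invented for illustration.

```python
import statistics

# Toy signature database: known-bad substrings (invented for illustration).
SIGNATURES = {"mimikatz", "eicar-test"}

def signature_match(event: str) -> bool:
    """Flag only events containing a known-bad pattern."""
    return any(sig in event.lower() for sig in SIGNATURES)

def anomaly_score(value: float, baseline: list) -> float:
    """Z-score of a new observation against a learned baseline."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(value - mean) / stdev

# A novel attack with no known signature slips past the signature check...
novel_missed = not signature_match("unknown_dropper.exe beaconing out")

# ...but a behavioural baseline (e.g. outbound connections per hour)
# flags the same activity as a large deviation from normal.
baseline = [10, 12, 11, 9, 13, 10]
score = anomaly_score(120, baseline)
```

Real deep-learning detectors replace the z-score with learned models over many features, but the shape of the argument is the same: detection keyed to behaviour rather than to a catalogue of past attacks.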
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (1.00)
Ordr nabs $40M to monitor connected devices for anomalies – TechCrunch
In 2015, there were approximately 3.5 billion internet of things (IoT) devices in use. Today, the number stands around 35 billion, and is expected to eclipse 75 billion by 2025. IoT devices range from connected blood pressure monitors to industrial temperature sensors, and they're indispensable - but securing them at that scale is a challenge. That challenge was the driving force behind Ordr, a startup focused on network-level device security. Pandian Gnanaprakasam and Sheausong Yang -- who between them had tenures at Cisco, Aruba Networks, and AT&T Bell Labs -- co-founded Ordr in 2015 to address what they call the "visibility gap" in enterprise networks.
- North America > Aruba (0.26)
- North America > United States > California > Santa Clara County > Palo Alto (0.05)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- Government > Military > Cyberwarfare (0.34)
The Rise Of AI & Cyber Security - 8 Must Try Tools In 2022
AI is on the rise: AI and machine learning are shaping the future of cybersecurity, driving a corresponding rise in AI-based cybersecurity tools. A recent study found that nearly two-thirds of organizations are using, or plan to use, AI capabilities in their security operations by 2023. The days of deliberate, human-driven malware attacks are fading fast. We're now seeing a surge in AI-powered attacks that can bypass even the most sophisticated security defenses. No wonder, then, that demand for artificial intelligence cybersecurity tools is also growing at an exponential rate.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (1.00)
Insider Threat Mitigation: The Role of AI and ML
It needs no telling how damaging insider threats can be. Among their numerous impacts, the most significant are the loss of critical data and operational disruption, according to statistics from the Bitglass 2020 Insider Threat Report. Insider threats can also damage a company's reputation and cost it its competitive edge. Insider threat mitigation is difficult because the actors are trusted agents, who often have legitimate access to company data. With most legacy tools having failed, many cybersecurity experts agree that it is time to move on.
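Because the actors are trusted and their credentials are valid, AI/ML approaches to insider threats typically model each user's normal behaviour and flag departures from it rather than looking for malicious code. The minimal sketch below keeps a per-user baseline of resources accessed; the class name, users, and file paths are illustrative assumptions, and real user-behaviour analytics products model far more signals (time of day, volume, peer groups).

```python
from collections import defaultdict

class AccessBaseline:
    """Minimal user-behaviour baseline: which resources each user normally touches."""

    def __init__(self):
        self.seen = defaultdict(set)

    def observe(self, user: str, resource: str) -> None:
        """Record a routine access, growing the user's historical footprint."""
        self.seen[user].add(resource)

    def is_anomalous(self, user: str, resource: str) -> bool:
        # A trusted insider using legitimate credentials still stands out
        # when they reach for data outside their historical footprint.
        return resource not in self.seen[user]

baseline = AccessBaseline()
for r in ["hr/payroll.xlsx", "hr/benefits.pdf"]:
    baseline.observe("alice", r)

routine = baseline.is_anomalous("alice", "hr/payroll.xlsx")      # False
unusual = baseline.is_anomalous("alice", "eng/source_code.zip")  # True
```

The point of the sketch is the shift in question: not "is this action permitted?" (it is, by definition, for an insider) but "is this action normal for this user?".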
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.48)
Why AI is your best defense against cyber attacks
Cybersecurity is a constant concern for businesses of all sizes. There are countless threats to organizational data by a growing host of bad actors, and the risks of a cyberattack on your business are only growing. A recent study found that 76 percent of U.S. businesses had experienced a cyberattack last year alone. Given the large number of remote workers logging into company files from unsecured networks with no IT supervision, it's not a question of "if" but "when" your company will become infiltrated. For most businesses, well-known hacks like ransomware or phishing are top of mind.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (1.00)
Exploring the cutting edge of AI in cybersecurity ZDNet
With the number of cybersecurity threats increasing daily, the ability of today's cybersecurity tools and human cybersecurity teams to keep pace is being overwhelmed by an avalanche of malware. According to Capgemini's 2019 report Reinventing Cybersecurity with Artificial Intelligence: The New Frontier in Digital Security, 56% of survey respondents said their cybersecurity analysts cannot keep pace with the increasing number and sophistication of attacks; 23% said they cannot properly investigate all the incidents that impact their organization; and 42% said they are seeing an increase in attacks against "time-sensitive" applications like control systems for cars and airplanes. "In the Internet Age, with hackers' ability to commit theft or cause harm remotely, shielding assets and operations from those who intend harm has become more difficult than ever," the report states. "The numbers are staggering -- Cisco alone reported that, in 2018, they blocked seven trillion threats on behalf of their customers. With such ever-increasing threats, organizations need help. Some organizations are turning to AI [artificial intelligence], not so much to completely solve their problems (yet), but rather to shore up the defenses."
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (1.00)
Machine-Learning Tips for Companies that Don't Want to Upset or Annoy Their Employees: Eye on A.I.
Although many companies talk about artificial intelligence, it's likely that the majority of their employees aren't actually using machine-learning technologies in the workplace. One big reason is that while executives may be excited about A.I., employees may feel threatened or even insulted that managers would force them to use tools that they fear will one day replace them. As FedEx senior data scientist Clayton Clouse said during an A.I. conference in San Francisco last week, "We shouldn't expect that people will jump up and down and be excited when we say, 'Hey, we're going to be augmenting your job with A.I.'" Citing a survey about A.I. from McKinsey, Clouse said that while the majority of companies polled by the consulting firm said they were implementing A.I. either in their business or through pilot projects, "only 6% reported that their employees were actually using the system the way they should be used." The employees, it turns out, are skeptical about A.I., especially machine-learning tools intended to automate decision-making in some way, Clouse said. If workers don't trust the A.I. tools to do as good a job as them, they simply aren't going to use them, he explained.
- North America > United States > California > San Francisco County > San Francisco (0.25)
- Europe > United Kingdom (0.15)
- North America > Canada > Alberta (0.05)
- (3 more...)